Web Survey Bibliography
There is considerable debate about whether questions should be presented in a grid or as a single item per screen. From a purely operational point of view, grids take less time to complete, which should reduce response burden, although recent research suggests that respondents prefer a single item per screen. From a measurement point of view, grids pose numerous issues: higher item nonresponse, higher Cronbach's alpha, greater item non-differentiation, and sometimes, though not always, higher measurement error.
In this experiment we test the Vitality (4 items) and Mental Health (5 items) subscales of the SF-36v2®. The SF-36v2® Health Survey asks 36 questions to measure functional health and well-being from the patient's point of view. It is called a generic health survey because it can be used across ages (18 and older), diseases, and treatment groups, as opposed to a disease-specific health survey, which focuses on a particular condition or disease. Two of the four Vitality items and two of the five Mental Health items are reversed in meaning.
A sample of 2,500 KnowledgePanel® respondents was randomly assigned to one of five experimental conditions: Group 1: Standard grid; Group 2: Shaded grid; Group 3: One item per screen with horizontal response options; Group 4: One item per screen with vertical response options; Group 5: One item per screen with vertical shaded response options. Approximately 360 respondents completed the survey per condition for a completion rate of 73.4%.
The survey was optimized for screens with a minimum resolution of 800 by 600 pixels. During the study we collected each respondent's browser type, which allowed us to exclude cases in which the survey was taken on MSN TV or on an iPhone/PDA. The final sample used for the analysis, after exclusions, was 1,449 cases, for an average group size of 290.
We hypothesized that items presented in a grid would show more measurement error, indicated by a lower Cronbach's alpha and more "inconsistencies" for the grid presentation. The main reason is that presenting items one per screen allows the respondent to focus on each question; in a grid it is easier to get confused, especially when the meaning of some items is reversed. The index of consistency was computed by correlating the total sum of scores for the reversed items with the total sum of scores for the non-reversed items: if respondents answer consistently, this correlation should be higher.
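The two quantities described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' actual code: the helper names are hypothetical, scores are assumed to be an (n_respondents, k_items) matrix, and reversed items are assumed to be recoded to the common direction before summing.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def consistency_index(reversed_items: np.ndarray,
                      regular_items: np.ndarray) -> float:
    """Correlate the summed (recoded) reversed items with the summed
    non-reversed items; a higher correlation indicates more consistent answers."""
    rev_total = reversed_items.sum(axis=1)
    reg_total = regular_items.sum(axis=1)
    return float(np.corrcoef(rev_total, reg_total)[0, 1])
```

For the Vitality subscale, for example, `consistency_index` would take the two reversed items as the first argument and the two remaining items as the second, computed separately within each experimental group.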
Results go in the expected directions (a lower alpha for the grid presentation and a higher consistency correlation for the single-item presentation), although the differences among groups do not reach statistical significance.